

Search for: All records

Creators/Authors contains: "Kroemer, O."


  1.
    As autonomous robots interact with and navigate around real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works on object articulation identification require manipulation of the object, either by the robot or a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models or a sequence of observations in which the articulated parts move according to their kinematic constraints. In this work, we propose FormNet, a neural network that identifies the articulation mechanisms between pairs of object parts from a single frame of an RGB-D image and segmentation masks. The network is trained on 100k synthetic images of 149 articulated objects from 6 categories. The synthetic images are rendered via a photorealistic simulator with domain randomization. Our proposed model predicts motion residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation type classification accuracy of 82.5% on novel object instances in the trained categories. Experiments also show how this method enables generalization to novel categories and can be applied to real-world images without fine-tuning.
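The FormNet abstract above (entry 1) states that predicted motion residual flows are used to determine the articulation type and parameters. The following is a minimal illustrative sketch of that last step only, under a small-motion assumption; the function names, tolerances, and model-fitting details are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def skew(v):
    """Matrix [v]x such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def classify_articulation(points, flows, fixed_tol=1e-3):
    """Fit prismatic and revolute motion models to per-point residual flows
    (small-motion approximation) and return the lower-residual hypothesis.

    points, flows: (N, 3) arrays of part-point positions and predicted flows.
    """
    # Prismatic hypothesis: every point shares the same translation.
    t_pris = flows.mean(axis=0)
    res_pris = np.linalg.norm(flows - t_pris, axis=1).mean()

    # Revolute hypothesis: flow_i = w x (p_i - c)  <=>  flow_i = -[p_i]x w + t,
    # which is linear in the unknowns (w, t).
    A = np.hstack([np.vstack([-skew(p) for p in points]),
                   np.vstack([np.eye(3)] * len(points))])
    x, *_ = np.linalg.lstsq(A, flows.reshape(-1), rcond=None)
    w, t = x[:3], x[3:]
    res_rev = np.linalg.norm((A @ x).reshape(-1, 3) - flows, axis=1).mean()

    if res_pris < fixed_tol and np.linalg.norm(t_pris) < fixed_tol:
        return "fixed", None                     # negligible motion between parts
    if res_rev < res_pris:
        axis = w / (np.linalg.norm(w) + 1e-12)   # rotation axis direction
        point_on_axis = np.cross(w, t) / (np.dot(w, w) + 1e-12)
        return "revolute", (axis, point_on_axis)
    return "prismatic", (t_pris / (np.linalg.norm(t_pris) + 1e-12),)
```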
  2.
    To perform manipulation tasks in the real world, robots need to operate on objects of various shapes and sizes, often without access to geometric models. To achieve this level of generalization, it is often infeasible to train monolithic neural network policies across such large variations in object properties. To address this generalization challenge, we propose to learn modular task policies which compose object-centric task-axes controllers. These task-axes controllers are parameterized by properties associated with the underlying objects in the scene. We infer these controller parameters directly from visual input using multi-view dense correspondence learning. Our overall approach provides a simple yet powerful framework for learning manipulation tasks. We empirically evaluate our approach on 3 different manipulation tasks and show its ability to generalize to large variations in object size, shape, and geometry.
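Entry 2 above parameterizes object-centric task-axes controllers from visual input via multi-view dense correspondence. The sketch below shows one way inferred object keypoints could parameterize such a controller; the keypoint names, gain, and step limit are hypothetical and are not values from the paper.

```python
import numpy as np

# Hypothetical 3D keypoints (robot frame), e.g. produced by a multi-view
# dense-correspondence model for the object currently in the scene.
keypoints = {
    "handle": np.array([0.42, -0.05, 0.11]),
    "lid_center": np.array([0.40, 0.00, 0.18]),
}

def make_task_axis_controller(keypoints, gain=0.5, max_step=0.02):
    """Parameterize a position controller along the task axis running from
    the handle keypoint toward the lid-center keypoint."""
    axis = keypoints["lid_center"] - keypoints["handle"]
    axis = axis / np.linalg.norm(axis)
    target = keypoints["lid_center"]

    def controller(ee_pos):
        # Step the end-effector along the axis, proportional to the remaining
        # distance to the target, with a clipped step size.
        step = gain * np.dot(axis, target - ee_pos)
        return axis * np.clip(step, -max_step, max_step)

    return controller

controller = make_task_axis_controller(keypoints)
delta = controller(np.array([0.30, 0.00, 0.10]))   # commanded Cartesian step
```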
  3.
    Humans leverage the dynamics of the environment and their own bodies to accomplish challenging tasks such as grasping an object while walking past it or pushing off a wall to turn a corner. Such tasks often involve switching dynamics as the robot makes and breaks contact. Learning these dynamics is a challenging problem and prone to model inaccuracies, especially near contact regions. In this work, we present a framework for learning composite dynamical behaviors from expert demonstrations. We learn a switching linear dynamical model, with contacts encoded in the switching conditions, as a close approximation of our system dynamics. We then use discrete-time LQR as the differentiable policy class for data-efficient learning of control, developing a control strategy that operates over multiple dynamical modes and takes into account discontinuities due to contact. In addition to predicting interactions with the environment, our policy effectively reacts to inaccurate predictions such as unanticipated contacts. Through simulation and real-world experiments, we demonstrate generalization of the learned behaviors to different scenarios and robustness to model inaccuracies during execution.
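Entry 3 above combines a switching linear dynamical model, with contacts encoded in the switching conditions, and discrete-time LQR as the policy class. The sketch below illustrates that combination under simplifying assumptions rather than reproducing the paper's implementation: a finite-horizon Riccati recursion computes the LQR gains, and a rollout re-derives the gain whenever a contact guard selects a different linear mode.

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR via a backward Riccati recursion."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                  # gains ordered from t = 0 onward

def rollout(x0, modes, guard, Q, R, horizon):
    """Simulate a switching linear system x' = A_k x + B_k u, re-deriving the
    LQR gain whenever the contact guard selects a different mode k."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for t in range(horizon):
        A, B = modes[guard(x)]                      # dynamics of the active contact mode
        K = lqr_gains(A, B, Q, R, horizon - t)[0]   # receding-horizon feedback gain
        u = -K @ x                                  # regulate the state to the origin
        x = A @ x + B @ u
        traj.append(x.copy())
    return np.array(traj)
```

In this sketch, modes could map a binary contact flag to (A, B) pairs identified from demonstrations, for example modes = {0: (A_free, B_free), 1: (A_contact, B_contact)} with guard = lambda x: int(x[0] <= 0.0); all of these names are placeholders rather than quantities from the paper.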
  4.
    Distributed manipulators, consisting of a set of actuators or robots working cooperatively to achieve a manipulation task, are robust and flexible tools for performing a range of planar manipulation skills. One novel example is the delta array, a distributed manipulator composed of a grid of delta robots that is capable of performing dexterous manipulation tasks using strategies incorporating both dynamic and static contact. Hand-designing effective distributed control policies for such a manipulator can be complex and time-consuming, given the high-dimensional action space and unfamiliar system dynamics. In this paper, we examine the principles guiding the development and control of such a delta array for a planar translation task. We explore policy learning as a robust cooperative control approach, allowing for smooth manipulation of a range of objects and showing improved accuracy and efficiency over baseline human-designed policies.
  5. Siciliano, B.; Laschi, C.; Khatib, O. (Ed.)
    We design a compliant delta manipulator using 3D printing and soft materials. Our design differs from traditional rigid delta robots in that it is more accessible through low-cost 3D printing and can interact safely with its surroundings due to its compliance. This work focuses on the parallelogram links, which are a key component of the delta robot design. We characterize these links over twelve dimensional parameters, such as beam and hinge thickness, and two material stiffness settings by displacing them and observing the resulting forces and rotation angles. The parallelogram links are then integrated into a delta robot structure to test for delta mechanism behavior, which keeps the end-effector parallel to the base of the robot. We observed that using compliant hinges resulted in near-delta behavior, laying the groundwork for fabricating and utilizing 3D-printed compliant delta manipulators.
  6.
    Traditional parallel-jaw grippers are insufficient for delicate object manipulation due to their stiffness and lack of dexterity. Other dexterous robotic hands often have bulky fingers, rely on complex time-varying cable drives, or are prohibitively expensive. In this paper, we introduce a novel low-cost compliant gripper with two centimeter-scale 3-DOF delta robots using off-the-shelf linear actuators and 3D-printed soft materials. To model the kinematics of delta robots with soft compliant links, which diverge from typical rigid links, we train neural networks using a perception system. Furthermore, we analyze the delta robot’s force profile by varying the starting position in its workspace and measuring the resulting force from a push action. Finally, we demonstrate the compliance and dexterity of our gripper through six dexterous manipulation tasks involving small and delicate objects. Thus, we present the groundwork for creating modular multi-fingered hands that can execute precise and low-inertia manipulations.
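Entry 6 above models the kinematics of the soft delta fingers with neural networks supervised by a perception system. The snippet below is a minimal, hypothetical PyTorch sketch of that idea; the network size, optimizer settings, and variable names are assumptions and are not specified by the paper.

```python
import torch
import torch.nn as nn

# Hypothetical learned forward-kinematics model for one soft delta finger:
# it maps the 3 linear-actuator commands to the tracked fingertip position.
model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

def fit_kinematics(actuator_cmds, tracked_tips, epochs=200, lr=1e-3):
    """actuator_cmds: (N, 3) tensor of commands; tracked_tips: (N, 3) tensor
    of fingertip positions measured by a perception system."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(actuator_cmds), tracked_tips)
        loss.backward()
        opt.step()
    return model
```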
  7.
  8. A key challenge in intelligent robotics is creating robots that are capable of directly interacting with the world around them to achieve their goals. The last decade has seen substantial growth in research on the problem of robot manipulation, which aims to exploit the increasing availability of affordable robot arms and grippers to create such robots. Learning will be central to such autonomous systems, as the real world contains too much variation for a robot to expect to have, in advance, an accurate model of its environment, the objects in it, or the skills required to manipulate them. We survey a representative subset of that research which uses machine learning for manipulation. We describe a formalization of the robot manipulation learning problem that synthesizes existing research into a single coherent framework and highlight the many remaining research opportunities and challenges.
  9.
  10.
    Manipulation tasks can often be decomposed into multiple subtasks performed in parallel, e.g., sliding an object to a goal pose while maintaining contact with a table. Individual subtasks can be achieved by task-axis controllers defined relative to the objects being manipulated, and a set of object-centric controllers can be combined in a hierarchy. In prior works, such combinations are defined manually or learned from demonstrations. By contrast, we propose using reinforcement learning to dynamically compose hierarchical object-centric controllers for manipulation tasks. Experiments in both simulation and the real world show how the proposed approach leads to improved sample efficiency, zero-shot generalization to novel test environments, and simulation-to-reality transfer without fine-tuning.
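Entry 10 above uses reinforcement learning to dynamically compose hierarchical object-centric controllers. The sketch below shows one plausible composition rule for such a hierarchy, not the paper's formulation: a learned weighting orders the controllers, and lower-priority controllers act only along directions not already claimed by higher-priority task axes. All controller definitions, weights, and observation fields are hypothetical.

```python
import numpy as np

def compose(controllers, weights, obs):
    """Combine object-centric task-axis controllers by priority: the
    highest-weighted controller acts first, and lower-priority ones act
    only along directions not already claimed by earlier task axes."""
    order = np.argsort(weights)[::-1]   # highest weight first
    cmd = np.zeros(3)
    P = np.eye(3)                       # projector onto unclaimed directions
    for i in order:
        axis, ctrl = controllers[i]     # unit task axis, scalar control law
        cmd += P @ (ctrl(obs) * axis)   # contribute along the (projected) axis
        a = P @ axis
        norm = np.linalg.norm(a)
        if norm > 1e-6:                 # remove that direction from the projector
            a = a / norm
            P = P - np.outer(a, a)
    return cmd

# Hypothetical usage: one controller keeps contact with a table (z axis) while
# another slides the object toward a goal (x axis); an RL policy would output
# the weights at every step.
controllers = [
    (np.array([0.0, 0.0, 1.0]), lambda obs: -0.01),                             # press down
    (np.array([1.0, 0.0, 0.0]), lambda obs: 0.5 * (obs["goal_x"] - obs["x"])),  # slide
]
cmd = compose(controllers, weights=np.array([0.9, 0.6]), obs={"x": 0.1, "goal_x": 0.3})
```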